feat: introduce App::run_return
#12668
Conversation
Package Changes Through 15002c0

There are 12 changes, which include: tauri (minor), tauri-runtime (minor), tauri-runtime-wry (minor), tauri-utils (minor), tauri-cli (minor), @tauri-apps/cli (minor), @tauri-apps/api (minor), tauri-bundler (patch), tauri-build (minor), tauri-codegen (minor), tauri-macros (minor), tauri-plugin (minor).

Planned Package Versions

The following package releases are planned based on the context of changes in this pull request.
Do we want to deprecate …?
Changes from 5b41a90 to e48e8c3 (compare).
crates/tauri-runtime-wry/src/lib.rs
Outdated
```diff
@@ -2840,13 +2843,13 @@ impl<T: UserEvent> Runtime<T> for Wry<T> {
     let active_tracing_spans = self.context.main_thread.active_tracing_spans.clone();
     let proxy = self.event_loop.create_proxy();

-    self.event_loop.run(move |event, event_loop, control_flow| {
+    self.event_loop.run_return(move |e, event_loop, cf| {
```
I renamed these variables here so that this stays on one line. That way, the git diff shows nicely that `run_return` is effectively what `run` used to be.
crates/tauri/src/app.rs
Outdated
```rust
pub fn run_return<F: FnMut(&AppHandle<R>, RunEvent) + 'static>(
  mut self,
  mut callback: F,
) -> std::result::Result<i32, Box<dyn std::error::Error>> {
```
I've just used the same error type as the setup callback. It is unfortunately missing `Send + Sync + 'static` bounds, but that is already the case and can't be changed now, I think.
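To illustrate why the missing bounds matter, here is a minimal, Tauri-free sketch (the `SetupError` type and `fallible` function are invented for illustration): a `Box<dyn Error>` without `Send + Sync` cannot be moved across threads, whereas one with the bounds can.

```rust
use std::error::Error;
use std::fmt;

// Invented error type, standing in for whatever the setup callback returns.
#[derive(Debug)]
struct SetupError(String);

impl fmt::Display for SetupError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "setup failed: {}", self.0)
    }
}

impl Error for SetupError {}

// With the `Send + Sync` bounds present, the boxed error can cross threads.
fn fallible() -> Result<i32, Box<dyn Error + Send + Sync>> {
    Err(Box::new(SetupError("demo".into())))
}

fn main() {
    // Moving the error into another thread only compiles because of the bounds;
    // with plain `Box<dyn Error>` this `spawn` would be rejected.
    let handle = std::thread::spawn(|| fallible().map_err(|e| e.to_string()));
    let res = handle.join().unwrap();
    assert_eq!(res.unwrap_err(), "setup failed: demo");
    println!("ok");
}
```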
I just tried it locally, and `crate::Result<i32>` works for me.
IMO, yes.
IIRC, `run_return` also works on Android (though it's probably good to test that first); only iOS should be a problem. I'm wondering whether we should really go for the cfg flag you used, or just do what winit, for example, does: document that it never returns on iOS (using tao's `run` instead of `run_return` for iOS internally). I didn't look too much into the code yet, so I don't know how feasible that is.
Removing the cfg is a semver-compatible change, so we can always do that later? I'd prefer an incremental approach if possible! :)

Deciding on what the mobile story is seems like a different problem to me that I'd rather not tackle, also because I don't know the internals of tao and Tauri well enough :)
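The semver argument can be sketched with a toy example (the `run_return_stub` function is invented, not the PR's code): a `#[cfg]`-gated public item only exists on some targets, and dropping the gate later merely *adds* availability, which is a backwards-compatible change.

```rust
// Compile the API only on targets where it is known to work.
// Removing this cfg later only makes the function available on more
// targets, so it is a semver-compatible relaxation.
#[cfg(not(target_os = "ios"))]
fn run_return_stub() -> i32 {
    // stand-in for the real desktop-only implementation
    0
}

fn main() {
    #[cfg(not(target_os = "ios"))]
    {
        assert_eq!(run_return_stub(), 0);
        println!("available on this target");
    }
    #[cfg(target_os = "ios")]
    println!("not available on iOS");
}
```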
What can be done to push this forward? I really want to see this feature in …

Unfortunately, the repository settings prevent me from effectively iterating on this because CI has to be approved constantly, and running …

Wasn't as much work as I thought it would be (wuhu AI!), so I'm iterating locally now.

Should be compiling now. Some of the tests also fail on …

Yey, all green!
@FabianLars @WSH032 Ready for another review round. I've reverted the change of returning `Result`. Refactoring this can be left to a future PR.
> I've reverted the change of returning `Result` from `run_return`

I have no opinion on this. I just think that if `run_return` does return a `Result`, I would prefer it to be `tauri::Result<T>` (`tauri::Error::Setup`) rather than `std::result::Result<T, Box<dyn Error>>`.

Regarding the lifecycle, we should look at @FabianLars's opinion.
I agree. Unfortunately, returning a …
I think this is possible, but I haven't looked at the code of all platforms. It is more involved than what I'd like to do at the moment.
It is more correct*, and I'd consider it a bug in `run_iteration` that it didn't do that as well.
(Take this with a grain of salt; I'm on my phone, so basically blind.)

What does the callback have to do with that? As far as I can see, you'd just return from `run_return` where right now there's a panic, completely ignoring the callback. The event loop should get stopped simply by dropping it (which we do by returning).

I don't really care about the error type we return; I'd just like to get rid of the panic here, tbh. If this is not possible, then the function's documentation should mention the panic, just in case.
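A minimal, Tauri-free sketch of that point (the `EventLoop` and `LoopError` types are invented): instead of panicking inside the event handler, a `run_return`-style function can surface the failure as an ordinary error, and the loop is torn down simply by returning, which drops it.

```rust
#[derive(Debug, PartialEq)]
enum LoopError {
    ChannelClosed,
}

struct EventLoop;

impl EventLoop {
    fn run_return(self, healthy: bool) -> Result<i32, LoopError> {
        if !healthy {
            // previously this would have been a panic / process::exit;
            // now it is an ordinary error the caller can handle
            return Err(LoopError::ChannelClosed);
        }
        // returning drops `self`, which is what actually stops the loop
        Ok(0)
    }
}

fn main() {
    assert_eq!(EventLoop.run_return(true), Ok(0));
    assert_eq!(EventLoop.run_return(false), Err(LoopError::ChannelClosed));
    println!("caller handles both paths");
}
```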
In my use case, I am calling …
I guess we could extend the internal "cleanup before exit" handler, perhaps?
In my case, I'm triggering "exit" by clicking the close icon on the last window. I'm not sure how to make that happen earlier or later. But after the window closes, and before the process can exit, I need to make sure other resources get cleaned up, like the …
I tried poking around the Tauri code a bit to see if I could find a solution. Maybe I have something? This re-implementation of `run_return`:

```rust
#[cfg(desktop)]
fn run_return<F: FnMut(RunEvent<T>) + 'static>(mut self, callback: F) -> i32 {
  use tao::platform::run_return::EventLoopExtRunReturn;

  let event_handler = make_event_handler(&self, callback);
  let exit = self.event_loop.run_return(event_handler);

  // after the event loop is ready to exit, run one more iteration to dispatch any cleanup messages
  // otherwise, the windows will become orphaned from the Rust code and remain visible until the process exits
  self.run_iteration(|_| {});

  exit
}
```

This seems to work in my case. Is this an OK thing to do in general? If not, I don't really know why it works, but maybe someone else knows a better way to get the cleanup to finish out.
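The effect of that extra iteration can be modeled with a toy, Tauri-free example (the `Loop` type and event names are invented): when the main loop returns on the exit event, later-queued cleanup messages are left behind, and one more drain pass dispatches them.

```rust
use std::collections::VecDeque;

struct Loop {
    queue: VecDeque<&'static str>,
}

impl Loop {
    // Returns as soon as "exit" is seen, leaving later events queued.
    fn run_return(&mut self) -> i32 {
        while let Some(ev) = self.queue.pop_front() {
            if ev == "exit" {
                return 0; // "destroy-window" is still pending!
            }
        }
        0
    }

    // Drains whatever is currently queued, then returns immediately.
    fn run_iteration(&mut self, mut cb: impl FnMut(&str)) {
        while let Some(ev) = self.queue.pop_front() {
            cb(ev);
        }
    }
}

fn main() {
    let mut l = Loop {
        queue: VecDeque::from(["exit", "destroy-window"]),
    };
    let code = l.run_return();
    assert_eq!(l.queue.len(), 1); // a cleanup message is still pending

    // one more iteration dispatches it, mirroring the workaround above
    let mut seen = Vec::new();
    l.run_iteration(|e| seen.push(e.to_string()));
    assert_eq!(seen, ["destroy-window"]);
    assert_eq!(code, 0);
    println!("drained");
}
```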
We want to deprecate …

I am happy for other people to jump in here; I am not going to get to work on this for a few days.
I finally found a way to do async state cleanup while Tauri's event loop is still running, so it's no longer necessary to rely on Tauri's internal cleanup to finish successfully. I'll share the workaround here for posterity, in case it might help anyone else:

```rust
fn main() {
  // start the tokio runtime
  tokio::runtime::Builder::new_multi_thread()
    .build()
    .unwrap()
    .block_on(async {
      tauri::async_runtime::set(tokio::runtime::Handle::current());

      // create our state
      let state = MyState::new();

      // run the tauri app
      tauri::Builder::default()
        .manage(state)
        .invoke_handler(...)
        .setup(...)
        .build(...)
        .unwrap()
        .run({
          let mut exiting = false;
          move |app, event| match event {
            tauri::RunEvent::ExitRequested { api, .. } => {
              // if we've already been here before, just let the event loop exit
              if exiting {
                return;
              }
              exiting = true;

              // don't exit just yet: still need to cleanup
              api.prevent_exit();

              // start a task to cleanup the state, since we need async
              tokio::task::spawn({
                let app = app.clone();
                async move {
                  // get a reference to our state from Tauri
                  let state = app.state::<MyState>()
                    .deref()
                    .clone();

                  // clean up the state
                  state.close()
                    .await
                    .unwrap();

                  // ask the tauri event loop to exit again
                  app.exit(0)
                }
              });
            }
            _ => ()
          }
        });
    })
}
```

Error handling and Tauri configuration were omitted for brevity. This pattern should work on the release version of Tauri too.
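The two-phase exit logic in that workaround can be isolated into a stripped-down, Tauri-free model (the `ExitApi` type and `handle_exit_requested` function are invented): the first `ExitRequested` is vetoed while cleanup runs; the second one, issued by the cleanup task via `app.exit(0)`, is allowed through.

```rust
struct ExitApi {
    prevented: bool,
}

impl ExitApi {
    fn prevent_exit(&mut self) {
        self.prevented = true;
    }
}

fn handle_exit_requested(exiting: &mut bool, api: &mut ExitApi) {
    if *exiting {
        // second request: let the event loop exit for real
        return;
    }
    *exiting = true;
    // veto the first request; the async cleanup task would be spawned here
    api.prevent_exit();
}

fn main() {
    let mut exiting = false;

    let mut first = ExitApi { prevented: false };
    handle_exit_requested(&mut exiting, &mut first);
    assert!(first.prevented); // vetoed: cleanup still pending

    let mut second = ExitApi { prevented: false };
    handle_exit_requested(&mut exiting, &mut second);
    assert!(!second.prevented); // allowed: the app now exits

    println!("two-phase exit ok");
}
```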
In our case, the equivalent of your …

I am almost certain we can achieve the same thing with some modifications to the internals of …
.changes/run-return-mobile.md
Outdated
```md
"tauri-runtime": minor:feat
---

`Runtime::run_return` now must also be implemented on mobile targets.
```
I guess this can be omitted, because the function is only added in this PR (see the other changelog entry?).
So I've tested this current implementation against our app again, and I cannot reproduce the problem you are describing, @cuchaz. Here is how I am using this: https://github.com/firezone/firezone/blob/8e3558653efdff5c9e31bc6b19d2bc5af7d06848/rust/gui-client/src-tauri/src/client/gui.rs#L116-L342

I think the important part here is:
I think the above could also be amended to close the application once the last window is closed, if you still call …

From my perspective, this PR doesn't need any further implementation patches. I think the way it works makes sense. What we might want is a more elaborate example / tutorial to show people how to implement a graceful exit with Tauri.

@cuchaz Could you try if the following fixes your problem?

```rust
.run_return(move |handle, event| match event {
  tauri::RunEvent::ExitRequested {
    api, code: None, ..
  } => {
    api.prevent_exit();
    handle.exit(0);
  }
  tauri::RunEvent::MenuEvent(_)
  | tauri::RunEvent::Exit
  | tauri::RunEvent::WindowEvent { .. }
  | tauri::RunEvent::WebviewEvent { .. }
  | tauri::RunEvent::Ready
  | tauri::RunEvent::Resumed
  | tauri::RunEvent::MainEventsCleared
  | tauri::RunEvent::TrayIconEvent(_)
  | _ => {}
});
```

I know it looks a bit silly, but effectively this should allow the event loop to process the final "close" click of the window, but also immediately schedule another trigger of exit (this time with an exit code), which will not be prevented and will exit the app.
Yey! Thanks everyone for collaborating on this :)
Wow, I am somehow unable to reproduce the lingering window problem anymore. The only difference on my end between then and now is that I applied some OS updates and rebooted my computer.

Sometimes, my terminal (gnome-terminal) hangs instead of opening a window, but rebooting causes the problem to go away. xterm doesn't suffer from the same problem, though. I just assumed that gnome-terminal hanging was unrelated to this Tauri issue, but now I'm not so sure. Maybe there was a bug in my OS's windowing system that was fixed in a recent update? Seems unlikely, but maybe it's possible.

This is super weird. I'm not crazy, I promise. =) Anyway, thanks for all your hard work. I'm looking forward to the next release.
At present, the Windows and Linux GUI clients launch the Tauri application via the `App::run` method. This function never returns. Instead, whenever we request the Tauri app to exit, Tauri will internally call `std::process::exit`, thus preventing ordinary clean-up from happening. Whilst we somehow managed to work around this particular part, having the app exit the process internally also makes error handling and reporting to the user difficult, as there are now two parts in the code where we need to handle errors:

- Before we start up the Tauri app
- Before we end the Tauri app (i.e. signal to it that we want to exit)

It would be much easier to understand if we could call into Tauri, let it do its thing, and upon a requested exit by the user, have the called function (i.e. `App::run`) simply return again. After diving into the inner workings of Tauri, we have achieved just that by adding a new function to `App`: `App::run_return` (tauri-apps/tauri#12668).

Using `App::run_return`, we can now orchestrate a `gui::run` function that simply returns after Tauri has shut down. Most importantly, it will also exit upon any fatal errors that we encounter in the controller, and thus unify the error handling path into a single one. These errors are now all handled at the call-site of `gui::run`.

Building on top of this, we will be able to further simplify the error handling within the GUI client. I am hoping to gradually replace our monolithic `Error` enums with individual errors that we can extract from an `anyhow::Error`. This would make it easier to reason about where certain errors get generated, and thus overall improve the UX of the application by displaying better error messages, not failing the entire app in certain cases, etc.
I was excited to try the new 2.4 release, and tragically, the lingering window issue has returned!

I figured out why I wasn't able to reproduce the issue in my last message. It's because I forgot my local clone of Tauri had the fix I described earlier applied to it, so the lingering issue didn't happen in that case.

I am able to reliably reproduce this lingering window issue using the latest 2.4 release. Here's a complete minimal working example, if you'd like to use it to help with debugging. Since the issue is with windowing systems, it's very likely the issue is specific to the operating system. Here's my OS info:
Since Linux Mint is based on Ubuntu (which is based on Debian), I'd be pretty surprised if the issue wasn't reproducible in Ubuntu (or Debian) too.

```rust
fn main() {
  println!("Start");

  tauri::Builder::default()
    .build(tauri::generate_context!())
    .expect("Failed to build Tauri app")
    .run_return(|_, _| {});

  println!("Tauri exited, but window is still visible during sleep!");
  std::thread::sleep(std::time::Duration::from_secs(3));

  println!("Finished");
}
```

There are no async tasks in setup, like you mentioned earlier, or nested futures or anything like that. This code is completely sync.

To reproduce the issue, build and run the complete example project from the link above. After the Tauri window appears, click the close button. While the code is …

Let me know if you have any other questions. I'll do what I can to help out.
Have you tried the suggestion I made? Match on …
Last time I looked at the source code, …
That is only when it fails! Under normal circumstances, this performs an ordinary shutdown.
See lines 530 to 537 in dd13728.
And what happens if there's an error? It's not acceptable in my app to skip cleanup even if there's an error in Tauri. Rust gives us the tools to handle these cases correctly. We should use them.
I haven't looked at the details, but I'd assume this only happens when a channel is closed, which I guess never happens in practice? You'd have to trace the code path. We should probably remove that exit as well, so we can panic in the event handler and force a regular unwind. I didn't try that, though.
The current `App::run_iteration` function is buggy because it busy-loops. To address the use case of cleaning up resources in the host program, we introduce `App::run_return`, which builds on top of `tao`'s `EventLoop::run_return`.

Related: #8631.
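The control flow this PR enables can be sketched with a self-contained mock (the `MockApp` type and event names are invented, not Tauri's API): `run_return` hands the exit code back to the caller, so ordinary cleanup can run before the process terminates, instead of Tauri calling `std::process::exit` internally.

```rust
struct MockApp {
    exit_code: i32,
}

impl MockApp {
    // Drives a pretend event loop to completion, then returns the exit
    // code instead of exiting the process.
    fn run_return(self, mut callback: impl FnMut(&str)) -> i32 {
        for event in ["Ready", "ExitRequested", "Exit"] {
            callback(event);
        }
        self.exit_code
    }
}

fn main() {
    let mut events = Vec::new();
    let code = MockApp { exit_code: 0 }.run_return(|e| events.push(e.to_string()));

    // cleanup now runs on the normal path, after the loop has returned
    assert_eq!(code, 0);
    assert_eq!(events.last().map(String::as_str), Some("Exit"));
    println!("cleaned up, exiting with {code}");
}
```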